Results 1 - 20 of 51
1.
J Clin Med ; 11(15)2022 Jul 29.
Article in English | MEDLINE | ID: mdl-35956042

ABSTRACT

The goal of this study was to evaluate the music perception of cochlear implantees with two different sound processing strategies. Methods: Twenty-one patients with unilateral or bilateral cochlear implants (Oticon Medical®) were included. A music trial evaluated emotions (sad versus happy, based on tempo and/or minor versus major modes) with three tests of increasing difficulty. This was followed by a test evaluating the perception of musical dissonances (marked out of 10). A novel sound processing strategy reducing spectral distortions (CrystalisXDP, Oticon Medical) was compared to the standard strategy (main peak interleaved sampling). Each strategy was used for one week before the music trial. Results: The total music score was higher with CrystalisXDP than with the standard strategy. Nine patients (21%) categorized music above the random level (>5) on test 3, which was based only on mode, with either of the strategies. In this group, CrystalisXDP improved performance. For dissonance detection, 17 patients (40%) scored above the random level with either of the strategies. In this group, CrystalisXDP did not improve performance. Conclusions: CrystalisXDP, which enhances spectral cues, seemed to improve the categorization of happy versus sad music. Spectral cues could contribute to musical emotions in cochlear implantees and improve the quality of musical perception.
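The "random level (>5)" criterion above treats 5/10 as the expected score when guessing on a two-alternative task scored out of 10. A minimal sketch of how such a chance threshold can be checked with an exact binomial tail (an illustration only, not the authors' analysis; the function name is hypothetical):

```python
from math import comb

# One-sided binomial tail: probability of scoring k or more out of n
# trials by guessing alone (success probability p per trial).
def chance_tail(k: int, n: int = 10, p: float = 0.5) -> float:
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The ">5 out of 10" cut-off equals the expected chance score; a stricter
# statistical criterion would require the tail probability to fall below 0.05.
print(round(chance_tail(6), 3))   # 6/10 is not reliably above chance
print(round(chance_tail(9), 3))   # 9/10 is (tail probability < 0.05)
```

This shows why a cut-off of >5 is a lenient criterion: a score of 6/10 still arises by guessing more than a third of the time.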

2.
Atten Percept Psychophys ; 84(4): 1370-1392, 2022 May.
Article in English | MEDLINE | ID: mdl-35437703

ABSTRACT

Humans have a remarkable capacity for perceiving and producing rhythm. Rhythmic competence is often viewed as a single concept, with participants who perform more or less accurately on a single rhythm task. However, research is revealing numerous sub-processes and competencies involved in rhythm perception and production, which can be selectively impaired or enhanced. To investigate whether different patterns of performance emerge across tasks and individuals, we measured performance across a range of rhythm tasks from different test batteries. Distinct performance patterns could potentially reveal separable rhythmic competencies that may draw on distinct neural mechanisms. Participants completed nine rhythm perception and production tasks selected from the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA), the Beat Alignment Test (BAT), the Beat-Based Advantage task (BBA), and two tasks from the Burgundy best Musical Aptitude Test (BbMAT). Principal component analyses revealed clear separation of task performance along three main dimensions: production, beat-based rhythm perception, and sequence memory-based rhythm perception. Hierarchical cluster analyses supported these results, revealing clusters of participants who performed selectively more or less accurately along different dimensions. The current results support the hypothesis of divergence of rhythmic skills. Based on these results, we provide guidelines towards a comprehensive testing of rhythm abilities, including at least three short tasks measuring: (1) rhythm production (e.g., tapping to metronome/music), (2) beat-based rhythm perception (e.g., BAT), and (3) sequence memory-based rhythm processing (e.g., BBA). Implications for underlying neural mechanisms, future research, and potential directions for rehabilitation and training programs are discussed.
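The dimensionality analysis described above can be sketched in a few lines: standardize a participants × tasks score matrix and read the explained variance off the singular values. The data below are simulated under the three-dimension structure the study reports; all names and numbers are hypothetical and this is not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores: 40 participants x 9 rhythm tasks. Columns 0-2
# covary (production), 3-5 covary (beat-based perception), 6-8 covary
# (memory-based perception), mimicking three separable competencies.
latent = rng.normal(size=(40, 3))
scores = np.repeat(latent, 3, axis=1) + 0.3 * rng.normal(size=(40, 9))

# PCA via SVD on the standardized score matrix.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
_, s, _ = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)

# With three latent dimensions, the first three components should
# capture most of the variance.
print(np.round(explained[:3], 2), "sum:", round(float(explained[:3].sum()), 2))
```

A hierarchical cluster analysis of the participant loadings on these components would then group people who are selectively strong or weak along each dimension, as the abstract describes.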


Subjects
Auditory Perception, Music, Humans, Memory, Task Performance and Analysis
3.
Front Psychol ; 12: 751248, 2021.
Article in English | MEDLINE | ID: mdl-34925155

ABSTRACT

Multimodal perception is a key factor in obtaining a rich and meaningful representation of the world. However, how individual stimuli combine to determine the overall percept remains a matter of research. The present work investigates the effect of sound on the bimodal perception of motion. A visual moving target was presented to the participants, associated with a concurrent sound, in a time reproduction task. Particular attention was paid to the structure of both the auditory and the visual stimuli. Four different laws of motion were tested for the visual motion, one of which was biological. Nine different sound profiles were tested, from a simple constant sound to more variable and complex pitch profiles, always presented synchronously with the motion. Participants' responses show that constant sounds produce the worst duration estimation performance, even worse than the silent condition; more complex sounds, instead, guarantee significantly better performance. The structure of the visual stimulus and that of the auditory stimulus appear to condition the performance independently. Biological motion provides the best performance, while motion with a constant-velocity profile provides the worst. Results clearly show that a concurrent sound influences the unified perception of motion; the type and magnitude of the bias depend on the structure of the sound stimulus. Contrary to expectations, the best performance is not generated by the simplest stimuli, but rather by more complex stimuli that are richer in information.

4.
Front Neurosci ; 15: 558421, 2021.
Article in English | MEDLINE | ID: mdl-34025335

ABSTRACT

Introduction: The objective of our study was to evaluate musical perception and its relation to quality of life in patients with bimodal binaural auditory stimulation. Materials and Methods: Nineteen adult patients with a cochlear implant (CI) for a minimum of 6 months and moderate to severe contralateral hearing loss fitted with a hearing aid (HA), and 21 normal-hearing adults, were included in this prospective, cross-sectional study. Pure-tone and speech audiometry, a musical test evaluating sound perception characteristics and musical listening abilities, the Munich questionnaire for musical habits, and the APHAB questionnaire were recorded. Performance in the musical perception test with HA, CI, and HA + CI, and potential correlations between the music test, audiometry, and questionnaires were investigated. Results: Bimodal stimulation improved musical perception in several features (sound brightness, roughness, and clarity) in comparison with unimodal hearing, but the CI did not add to HA performance in texture, polyphony, or musical emotion, and even appeared to interfere negatively with pitch perception through the HA. Musical perception performance (sound clarity, instrument recognition) appeared to be correlated with hearing-related quality of life (APHAB RV and EC subdomains) but not with speech performance, suggesting that the exploration of musical perception complements speech understanding evaluation to better describe everyday hearing handicap. Conclusion: Testing musical sound perception provides important information on hearing performance as a complement to speech audiometry and appears to be related to hearing-related quality of life.

5.
Cortex ; 130: 78-93, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32645502

ABSTRACT

Regarding the hemispheric laterality of emotion processing in the brain, two competing hypotheses are still debated. The first suggests a greater involvement of the right hemisphere in emotion perception, whereas the second suggests a different involvement of each hemisphere as a function of the valence of the emotion. These hypotheses are based on findings for facial and prosodic emotion perception. Investigating emotion perception for other stimuli, such as music, should provide further insight and potentially help to disentangle these two hypotheses. The present study investigated musical emotion perception in patients with unilateral right brain damage (RBD, n = 16) or left brain damage (LBD, n = 16), as well as in matched healthy comparison participants (n = 28). The experimental task required explicit recognition of musical emotions as well as ratings of the perceived intensity of the emotion. Compared with matched comparison participants, musical emotion recognition was impaired only in LBD participants, suggesting a potential specificity of the left hemisphere for explicit emotion recognition in musical material. In contrast, intensity ratings of musical emotions revealed that RBD patients underestimated the intensity of negative emotions compared with positive emotions, while LBD patients and comparisons did not show this pattern. To control for a potential generalized emotion deficit for other types of stimuli, we also tested facial emotion recognition in the same patients and their matched healthy comparisons. This revealed that emotion recognition after brain damage might depend on the stimulus category or modality used. These results are in line with the hypothesis of a deficit of emotion perception depending on lesion laterality and valence in brain-damaged participants. The present findings provide critical information to disentangle the currently debated competing hypotheses and thus allow for a better characterization of the involvement of each hemisphere in explicit emotion recognition and perceived intensity.


Subjects
Music, Cerebral Cortex, Emotions, Facial Expression, Functional Laterality, Humans, Recognition (Psychology)
6.
Front Hum Neurosci ; 14: 216, 2020.
Article in English | MEDLINE | ID: mdl-32670038

ABSTRACT

Past empirical studies have suggested that older adults preferentially use gaze-based mood regulation to lessen their negative experiences while watching an emotional scene. This preference for a low cognitively demanding regulatory strategy leaves open the question of whether the effortful processing of a more cognitively demanding reappraisal task is really spared from the general age-related decline. Because it does not allow perceptual attention to be redirected away from the emotional source, music provides an ideal way to address this question. The goal of our study was to examine the affective, behavioral, physiological, and cognitive outcomes of positive and detached reappraisal in response to negative musical emotion in younger and older adults. Participants first simply listened to a series of threatening musical excerpts and were then instructed either to positively reappraise or to detach themselves from the emotion elicited by the music. Findings showed that, when instructed to simply listen to threatening music, older adults reported a more positive feeling, associated with a smaller skin conductance level (SCL), in comparison with their younger counterparts. When implementing positive and detached reappraisal, participants showed more positive and more aroused emotional experiences, whatever the age group. We also found that the instruction to intentionally reappraise negative emotions resulted in a lower cognitive cost for older adults than for younger adults. Taken together, these data suggest that, compared with younger adults, older adults engage in spontaneous downregulation of negative affect and successfully implement downregulation instructions. This extends previous findings and provides compelling evidence that, even when auditory attention cannot be redirected away from the emotional source, older adults are still more effective at regulating emotions. Taking into account the age-associated decline in executive functioning, our results suggest that the working memory task could have distracted older adults from the reminiscences of the threat-evoking music, thus resulting in emotional downregulation. Hence, even when instructed to implement reappraisal strategies, older adults might prefer distraction over engagement in reappraisal. This is congruent with the idea that, as they get older, people are more likely to be distracted from a negative source of emotion to maintain their well-being.

7.
J Exp Child Psychol ; 191: 104711, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31770684

ABSTRACT

Effects of music on language processing have been reported separately for syntax and for semantics. Previous studies have shown that regular musical rhythms can facilitate syntax processing and that semantic features of musical excerpts can influence semantic processing of words. It remains unclear whether musical parameters, such as rhythm and sound texture, may specifically influence different components of linguistic processing. In the current study, two types of musical sequences (one focusing on rhythm and the other focusing on sound texture) were presented to children who were asked to perform a syntax or a semantic task afterwards. The results revealed that rhythmic and textural musical sequences influence syntax and semantic processing differently. For grammaticality judgments, children's performance was better after regular rhythmic sequences than after textural sound sequences. In the semantic evocation task, children produced more numerous and more varied concepts after textural sound sequences than after regular rhythmic sequences. These results suggest that rhythm boosts the perceptual and cognitive sequencing required in syntax processing, whereas texture promotes verbalization and concept activation in verbal production. The findings have implications for the interpretation of musical priming effects and are discussed in the frameworks of dynamic attending and conceptual processing.


Subjects
Auditory Perception/physiology, Music, Psycholinguistics, Child, Female, Humans, Language Tests, Male, Semantics
8.
Neuropsychology ; 32(7): 880-894, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30047757

ABSTRACT

OBJECTIVE: To further our understanding of the role of perceptual processes in musical emotions, we investigated individuals with congenital amusia, a neurodevelopmental disorder that alters pitch processing. METHOD: Amusic and matched control participants were studied for emotion recognition and emotion intensity ratings of both musical excerpts and faces. RESULTS: Emotion recognition was found to be impaired in amusic participants relative to controls for the musical stimuli only. This impairment suggests that perceptual deficits in music processing reduce amusics' access to a verbal and conscious representation of musical emotions. Nevertheless, amusics' performance for emotion recognition was above chance level, and multidimensional scaling (MDS) analyses revealed that their categorization of musical pieces was based on similar representation spaces of emotions as for control participants. The emotion intensity ratings, nonverbal and possibly more implicit than the categorization task, seemed to be intact in amusic participants. CONCLUSIONS: These findings reveal that pitch deficits can hinder the recognition of emotions conveyed by musical pieces, while also highlighting the (at least partial) dissociation between emotion recognition and emotion intensity evaluation. Our study thus sheds light on the complex interactions between perceptual and emotional networks in the brain, by showing that impaired central auditory processing partially alters musical emotion processing.
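Multidimensional scaling, as used above, recovers a low-dimensional configuration of stimuli from their pairwise dissimilarities, so that stimuli judged similar end up close together. A minimal classical (Torgerson) MDS sketch on made-up emotion dissimilarities — an illustration of the technique, not the authors' exact procedure or data:

```python
import numpy as np

def classical_mds(d: np.ndarray, k: int = 2) -> np.ndarray:
    """Classical (Torgerson) MDS: embed points in k dimensions from a
    symmetric matrix of pairwise dissimilarities d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d**2) @ j                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]         # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Hypothetical dissimilarities between four musical emotions
# (happy, sad, scary, peaceful): happy/peaceful and sad/scary
# are each judged mutually similar.
d = np.array([[0.0, 0.9, 0.8, 0.2],
              [0.9, 0.0, 0.3, 0.8],
              [0.8, 0.3, 0.0, 0.9],
              [0.2, 0.8, 0.9, 0.0]])
coords = classical_mds(d)
print(coords.shape)  # (4, 2)
```

Comparing the embedded configurations of two groups (here, amusics versus controls) is then a matter of comparing the recovered coordinate spaces.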


Subjects
Auditory Perception Disorders/psychology, Emotions, Music/psychology, Recognition (Psychology), Adult, Facial Expression, Facial Recognition, Female, Humans, Male, Middle Aged, Pitch Perception, Psychomotor Performance, Social Perception, Young Adult
9.
Hear Res ; 337: 89-95, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27240480

ABSTRACT

UNLABELLED: While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity, and finding possible strategies to minimize it, is of utmost importance. Our aim here is to test the effects of an auditory training strategy, "Sound in Hands", using playful tasks grounded in the theoretical and empirical findings of cognitive sciences. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions in deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performance in trained tasks and whether they could transfer this learning to a phonetic discrimination test. MATERIAL AND METHODS: Nineteen prelingually deaf children (4-10 years old) with a unilateral cochlear implant and no additional handicap were recruited. The four main auditory cognitive processes (identification, discrimination, ASA, and auditory memory) were stimulated and trained in the experimental group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min; the untrained group served as the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). RESULTS: The EG showed a significant improvement in the identification, discrimination, and auditory memory tasks. The improvement in the ASA task did not reach significance. The CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for the EG only. Moreover, younger children benefited more from the auditory training program in developing their phonetic abilities than older children, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits in CI children and for gaining a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitative programs.


Subjects
Auditory Perception, Cochlear Implants, Deafness/rehabilitation, Deafness/surgery, Speech Perception, Adolescent, Adult, Child, Preschool Child, Cochlear Implantation, Cognition, Female, Humans, Language Development, Learning, Male, Middle Aged
10.
Neuropsychologia ; 85: 10-8, 2016 May.
Article in English | MEDLINE | ID: mdl-26944873

ABSTRACT

Congenital amusia is a neurodevelopmental disorder of music perception and production, which has been attributed to a major deficit in pitch processing. While most studies and diagnostic tests have used explicit investigation methods, recent studies using implicit investigation approaches have revealed some unimpaired pitch structure processing in congenital amusia. The present study investigated amusic individuals' processing of tonal structures (i.e., musical structures respecting the Western tonal system) via three different questions. Amusic participants and their matched controls judged tonal versions (original musical excerpts) and atonal versions (with the pitch content manipulated to remove tonal structures) of 12 musical pieces. For each piece, participants answered three questions that required judgments from different perspectives: an explicit structural one, a personal, emotional one, and a more social one (judging the perception of others). Results revealed that amusic individuals' judgments differed between tonal and atonal versions. However, the question type influenced the extent of the revealed structure processing: while amusic individuals were impaired on the question requiring explicit structural judgments, they performed as well as their matched controls on the two other questions. Together with other recent studies, these findings suggest that congenital amusia might be related to a disorder of conscious access to music processing rather than of music processing per se.


Subjects
Auditory Perception/physiology, Auditory Perception Disorders/physiopathology, Consciousness/physiology, Discrimination (Psychology)/physiology, Music, Acoustic Stimulation, Adult, Case-Control Studies, Female, Humans, Judgment, Male, Middle Aged, Reaction Time/physiology, Statistics as Topic, Young Adult
11.
Behav Neurol ; 2015: 707625, 2015.
Article in English | MEDLINE | ID: mdl-26508813

ABSTRACT

Music can be thought of as a complex stimulus able to enrich the encoding of an event, thus boosting its subsequent retrieval. However, several findings suggest that music can also interfere with memory performance. A better understanding of the behavioral and neural processes involved can substantially improve knowledge and shed new light on the most efficient music-based interventions. Based on functional near-infrared spectroscopy (fNIRS) studies of music, episodic encoding, and the dorsolateral prefrontal cortex (PFC), this work aims to extend previous findings by monitoring the entire lateral PFC during both encoding and retrieval of verbal material. Nineteen participants were asked to encode lists of words presented with either background music or silence and were subsequently tested in a free recall task. Meanwhile, their PFC was monitored using a 48-channel fNIRS system. Behavioral results showed greater chunking of words under the music condition, suggesting the use of associative strategies for items encoded with music. fNIRS results showed that music provided a less demanding way of modulating both episodic encoding and retrieval, with generally decreased prefrontal activity in the music versus silence condition. This suggests that music-related memory processes rely on specific neural mechanisms and that music can positively influence both episodic encoding and retrieval of verbal information.


Subjects
Mental Recall/physiology, Music, Prefrontal Cortex/physiology, Acoustic Stimulation, Adolescent, Adult, Brain Mapping, Female, Functional Neuroimaging, Humans, Male, Near-Infrared Spectroscopy, Young Adult
13.
Neuropsychologia ; 77: 313-20, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26359715

ABSTRACT

Previous research has indicated that the medial temporal lobe (MTL), and more specifically the perirhinal cortex, plays a role in the feeling of familiarity for non-musical stimuli. Here, we examined the contribution of the MTL to the feeling of familiarity for music by testing patients with unilateral MTL lesions. We used a gating paradigm: segments of familiar and unfamiliar musical excerpts were played with increasing durations (250, 500, 1000, 2000, 4000 ms, and complete excerpts), and participants provided familiarity judgments for each segment. Based on the hypothesis that patients might need longer segments than healthy controls (HC) to identify excerpts as familiar, we examined the onset of the emergence of familiarity in HC, patients with a right MTL resection (RTR), and patients with a left MTL resection (LTR). In contrast to our hypothesis, we found that the feeling of familiarity was relatively spared in patients with a right or left MTL lesion, even for short excerpts. All participants were able to differentiate familiar from unfamiliar excerpts as early as 500 ms, although the difference between familiar and unfamiliar judgments was greater in HC than in patients. These findings suggest that a unilateral MTL lesion does not impair the emergence of the feeling of familiarity. We also assessed whether the dynamics of the musical excerpt (linked to the type and amount of information contained in the excerpts) modulated the onset of the feeling of familiarity in the three groups. The difference between familiar and unfamiliar judgments was greater for high- than for low-dynamic excerpts for HC and RTR patients, but not for LTR patients, indicating that the LTR group did not benefit from dynamics in the same way. Overall, our results imply that the recognition of previously well-learned musical excerpts does not depend on the integrity of either the right or the left MTL structures. Patients with a unilateral MTL resection may compensate for the effects of unilateral damage by using the intact contralateral temporal lobe. Moreover, we suggest that remote semantic memory for music might depend more strongly on neocortical structures than on the MTL.
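The gating paradigm above presents each excerpt truncated at successively doubled durations. A sketch of how such gated segments can be cut from an audio array (the 44.1 kHz sampling rate and all names are assumptions for illustration):

```python
import numpy as np

SR = 44_100  # assumed sampling rate (Hz)

def gated_segments(excerpt: np.ndarray,
                   durations_ms=(250, 500, 1000, 2000, 4000)):
    """Return the gated versions of one excerpt: each segment starts at
    the beginning and is truncated at the given duration, mirroring the
    250 ms -> 4000 ms -> complete-excerpt gating described above."""
    segments = [excerpt[: int(SR * ms / 1000)] for ms in durations_ms]
    segments.append(excerpt)                   # complete excerpt last
    return segments

# 5 s of silence stands in for a real excerpt here.
excerpt = np.zeros(SR * 5)
for seg in gated_segments(excerpt):
    print(len(seg) / SR, "s")  # 0.25, 0.5, 1.0, 2.0, 4.0, 5.0
```

Presenting the segments in this fixed short-to-long order is what lets the experimenter locate the duration at which the feeling of familiarity first emerges.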


Subjects
Auditory Perception/physiology, Pattern Recognition (Physiological)/physiology, Recognition (Psychology)/physiology, Temporal Lobe/physiopathology, Acoustic Stimulation, Adult, Emotions/physiology, Female, Humans, Judgment/physiology, Male, Music, Neuropsychological Tests
14.
Front Psychol ; 6: 1316, 2015.
Article in English | MEDLINE | ID: mdl-26388818

ABSTRACT

Learning new words is an increasingly common necessity in everyday life. External factors, among which music and social interaction are particularly debated, are claimed to facilitate this task. Due to their influence on the learner's temporal behavior, these stimuli are able to drive the learner's attention to the correct referent of new words at the correct point in time. However, do music and social interaction impact learning behavior in the same way? The current study aims to answer this question. Native German speakers (N = 80) were requested to learn new words (pseudo-words) during a contextual learning game. This learning task was performed alone with a computer or with a partner, with or without music. Results showed that music and social interaction had a different impact on the learner's behavior: Participants tended to temporally coordinate their behavior more with a partner than with music, and in both cases more than with a computer. However, when both music and social interaction were present, this temporal coordination was hindered. These results suggest that while music and social interaction do influence participants' learning behavior, they have a different impact. Moreover, impaired behavior when both music and a partner are present suggests that different mechanisms are employed to coordinate with the two types of stimuli. Whether one or the other approach is more efficient for word learning, however, is a question still requiring further investigation, as no differences were observed between conditions in a retrieval phase, which took place immediately after the learning session. This study contributes to the literature on word learning in adults by investigating two possible facilitating factors, and has important implications for situations such as music therapy, in which music and social interaction are present at the same time.

15.
Pain Manag Nurs ; 16(5): 664-71, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26163741

ABSTRACT

In fibromyalgia, pain symptoms such as hyperalgesia and allodynia are associated with fatigue. Mechanisms underlying such symptoms can be modulated by listening to pleasant music. We expected that listening to music, because of its emotional impact, would have a greater modulating effect on the perception of pain and fatigue in patients with fibromyalgia than listening to non-musical sounds. To investigate this hypothesis, we carried out a 4-week study in which patients with fibromyalgia listened to either preselected musical pieces or environmental sounds when they experienced pain in active (while carrying out a physical activity) or passive (at rest) situations. Concomitant changes in pain and fatigue levels were evaluated. When patients listened to music or environmental sounds at rest, pain and fatigue levels were significantly reduced after 20 minutes of listening, with no difference in effect magnitude between the two stimuli. This improvement persisted 10 minutes after the end of the listening session. In active situations, pain did not increase in the presence of either stimulus. Contrary to our expectations, music and environmental sounds produced a similar relieving effect on pain and fatigue, with no benefit gained by listening to pleasant music over environmental sounds.


Subjects
Fatigue/therapy, Fibromyalgia/therapy, Music Therapy/methods, Pain Management, Pain, Sound, Adult, Environment, Exercise, Female, Humans, Middle Aged, Motor Activity, Pain Measurement, Rest, Sensory Art Therapies/methods, Treatment Outcome
16.
Front Aging Neurosci ; 7: 11, 2015.
Article in English | MEDLINE | ID: mdl-25741278

ABSTRACT

When presented with emotional visual scenes, older adults have been found to be as capable as younger adults of regulating emotion expression, corroborating the view that emotion regulation skills are maintained or even improved in later adulthood. However, the possibility that gaze direction might help achieve an emotion control goal has not been taken into account, raising the question of whether the effortful processing of expressive regulation is really spared from the general age-related decline. Since it does not allow perceptual attention to be redirected away from the emotional source, music provides a useful way to address this question. In the present study, the affective, behavioral, and physiological consequences of free expression of emotion, expressive suppression, and expressive enhancement were measured in 31 younger and 30 older adults while they listened to positive and negative musical excerpts. The main results indicated that, compared with younger adults, older adults reported experiencing less emotional intensity in response to negative music during the free expression of emotion condition. No age difference was found in the ability to amplify or reduce emotional expressions. However, an age-related decline in the ability to reduce the intensity of the emotional state and an age-related increase in physiological reactivity were found when participants were instructed to suppress negative expression. Taken together, the current data support previous findings suggesting an age-related change in response to music. They also corroborate the observation that older adults are as efficient as younger adults at controlling behavioral expression. Most importantly, however, they suggest that when faced with auditory sources of negative emotion, older age does not always confer a better ability to regulate emotions.

18.
Front Psychol ; 5: 1037, 2014.
Article in English | MEDLINE | ID: mdl-25278924

ABSTRACT

Inspired by theories of perception-action coupling and embodied music cognition, we investigated how rhythmic music perception impacts self-paced oscillatory movements. In a pilot study, we examined the kinematic parameters of self-paced oscillatory movements, walking, and finger tapping using optical motion capture. In accordance with biomechanical constraints accounts of motion, we found that movements followed a hierarchical organization depending on the proximal/distal characteristics of the limb used. Based on these findings, we were interested in knowing how and when the perception of rhythmic music could resonate with the motor system in the context of these constrained oscillatory movements. To test this, we conducted an experiment in which participants performed four different effector-specific movements (lower-leg, whole-arm, and forearm oscillation, and finger tapping) while rhythmic music played in the background. Musical stimuli consisted of computer-generated MIDI musical pieces with a 4/4 metrical structure. The musical tempo of each piece increased from 60 BPM to 120 BPM in 6-BPM increments. Each tempo was maintained for 20 s before a 2-s transition to the next higher tempo. The task of the participants was to maintain a comfortable pace for the four movements (self-paced) while not paying attention to the music. No instruction on whether to synchronize with the music was given. Results showed that participants were distinctively influenced by the background music depending on the movement used, with the tapping task consistently the most influenced. Furthermore, eight strategies that participants used to cope with the task were identified. Despite not being instructed to do so, participants also occasionally synchronized with the music. Results are discussed in terms of the link between perception and action (i.e., motor/perceptual resonance). In general, our results support the notion that rhythmic music is processed in a motoric fashion.
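The tempo staircase described above (60 to 120 BPM in 6-BPM steps, 20 s per plateau) can be sketched by computing the beat inter-onset interval for each step — an illustration of the stimulus timing, not the authors' code:

```python
# Tempo staircase for the stimuli: 60 -> 120 BPM in 6-BPM steps,
# each tempo held for 20 s. The inter-onset interval (IOI) between
# successive beats is 60 / BPM seconds.
tempi = list(range(60, 121, 6))          # 60, 66, ..., 120 BPM
iois = [60 / bpm for bpm in tempi]       # seconds between beats

for bpm, ioi in zip(tempi, iois):
    beats_in_block = int(20 / ioi)       # beats within one 20-s plateau
    print(f"{bpm:3d} BPM  IOI = {ioi:.3f} s  ~{beats_in_block} beats / 20 s")
```

The IOI halves over the staircase (1.0 s at 60 BPM down to 0.5 s at 120 BPM), which is what gradually pulls a synchronizing participant away from their preferred movement rate.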

19.
Cortex ; 59: 84-94, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25151640

ABSTRACT

Congenital amusia has been described as a lifelong deficit of music perception and production, notably including amusic individuals' difficulties to recognize a familiar tune without the aid of lyrics. The present study aimed to evaluate whether amusic individuals might have acquired long-term knowledge of familiar music, and to test for the minimal amount of acoustic information necessary to access this knowledge (if any) in amusia. Segments of familiar and unfamiliar instrumental musical pieces were presented with increasing duration (250, 500, 1000 msec etc.), and participants provided familiarity judgments for each segment. Results showed that amusic individuals succeeded in differentiating familiar from unfamiliar excerpts with as little acoustic information as did control participants (i.e., within 500 msec). The findings reveal that amusic individuals have stored musical pieces in long-term memory (LTM), and, together with other recent findings, they suggest that congenital amusia might impair conscious access to music processing rather than music processing per se.


Subjects
Auditory Perception/physiology, Auditory Perception Disorders/physiopathology, Music, Recognition (Psychology)/physiology, Acoustic Stimulation, Adult, Female, Humans, Judgment/physiology, Male, Middle Aged, Reaction Time, Young Adult
20.
Cortex ; 60: 82-93, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25023618

ABSTRACT

Music is a sound structure of remarkable acoustical and temporal complexity. Although it cannot denote specific meaning, it is one of the most potent and universal stimuli for inducing mood. How the auditory and limbic systems interact, and whether this interaction is lateralized when feeling emotions related to music, remains unclear. We studied the functional correlation between the auditory cortex (AC) and amygdala (AMY) through intracerebral recordings from both hemispheres in a single patient while she listened attentively to musical excerpts, which we compared to passive listening of a sequence of pure tones. While the left primary and secondary auditory cortices (PAC and SAC) showed larger increases in gamma-band responses than the right side, only the right side showed emotion-modulated gamma oscillatory activity. An intra- and inter-hemisphere correlation was observed between the auditory areas and AMY during the delivery of a sequence of pure tones. In contrast, a strikingly right-lateralized functional network between the AC and the AMY was observed to be related to the musical excerpts the patient experienced as happy, sad and peaceful. Interestingly, excerpts experienced as angry, which the patient disliked, were associated with widespread de-correlation between all the structures. These results suggest that the right auditory-limbic interactions result from the formation of oscillatory networks that bind the activities of the network nodes into coherence patterns, resulting in the emergence of a feeling.


Subjects
Amygdala/physiology, Auditory Cortex/physiology, Emotions/physiology, Music/psychology, Acoustic Stimulation, Auditory Perception/physiology, Brain Mapping, Female, Functional Laterality/physiology, Humans, Middle Aged, Neural Pathways/physiology